#LLM OPTIMIZATION
Explore tagged Tumblr posts
aidesignemea · 5 days ago
Text
LLM SEO EXPERT
LLM Applications in SEO
I'm observing how LLMs are being applied across various SEO functions. In content creation, they assist with drafting, outlining, optimizing titles and meta descriptions, and validating content quality based on E-E-A-T principles. For keyword research, LLMs are moving beyond simple keyword matching to understanding search intent and context, helping to identify long-tail keywords and trending topics. In technical SEO, they aid with schema markup, site architecture simplification, voice search optimization, and identifying technical issues like broken links. A new concept, 'LLM optimization' (LLMO) or 'AI Engine Optimization' (AEO), is emerging, focusing on content discoverability across conversational AI tools.
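As a concrete illustration of the content-creation use case, here is a minimal sketch of drafting a meta description with an LLM. It assumes the OpenAI Python SDK and an API key in the environment; the model name, helper name, and prompt are illustrative rather than a recommended workflow.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_meta_description(page_title: str, page_summary: str) -> str:
    """Hypothetical helper: ask an LLM for a ~155-character meta description."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You write concise, keyword-aware meta descriptions under 155 characters."},
            {"role": "user",
             "content": f"Title: {page_title}\nSummary: {page_summary}\nWrite one meta description."},
        ],
    )
    return response.choices[0].message.content.strip()

print(draft_meta_description("LLM Applications in SEO",
                             "How LLMs assist content creation, keyword research, and technical SEO."))
```

The same pattern extends to outlines, title variants, or draft schema markup, with human review before anything is published.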
0 notes
creating-by-starlight · 2 months ago
Text
Love having to explain in detail to profs why they can't just trust everything Chat spits out. My favorite thing.
4 notes · View notes
network-rail · 6 months ago
Text
In order to maximize disruption across the network this holiday season, we’ll be using AI to determine what sections of the network to close for maintenance. It’s already suggested Clapham Junction and the entire West Coast Main Line, so things are going well.
3 notes · View notes
drchristophedelongsblog · 23 days ago
Text
Kinexa: Revolutionizing Medicine and Sport with Advanced AI
Kinexa stands at the forefront of a technological revolution, poised to redefine medicine and sport through its sophisticated integration of Artificial Intelligence (AI), Large Language Models (LLMs), and computer vision. This powerful combination allows Kinexa to move beyond traditional methods, offering unprecedented precision, personalization, and proactive care.
Transforming Medicine
Kinexa's capabilities bring a new era of medical insight and efficiency:
Enhanced Diagnostics and Treatment: With AI, Kinexa can analyze vast quantities of patient data, from medical histories to diagnostic images, to assist in more accurate and earlier diagnoses. It can identify subtle patterns that might be missed by the human eye, suggesting optimal treatment pathways and predicting patient responses.
Personalized Rehabilitation and Monitoring: Computer vision is a game-changer for rehabilitation. It can precisely track patient movements, assess progress in real-time, and detect minute deviations from correct form. This enables highly personalized exercise regimens and continuous, objective monitoring, whether in a clinic or at home.
Intelligent Information Access: Kinexa's LLM can process and synthesize complex medical literature, research papers, and patient records. This allows healthcare professionals to quickly access relevant information, understand complex conditions, and even generate comprehensive reports, freeing up valuable time for direct patient care.
Proactive Health Management: By continuously analyzing data and visual cues, Kinexa can help identify potential health risks before they escalate. This shifts the focus from reactive treatment to proactive prevention, empowering both patients and practitioners.
Innovating Sport Performance
In the world of sport, Kinexa offers a competitive edge and enhanced safety:
Optimized Training and Performance: Computer vision provides immediate, objective feedback on an athlete's biomechanics, form, and technique. It can identify inefficiencies, highlight areas for improvement, and ensure proper execution of exercises, leading to optimized training programs and peak performance.
Advanced Injury Prevention: By analyzing movement patterns with incredible detail, Kinexa can detect subtle indicators of fatigue or improper form that could lead to injury. This proactive identification allows coaches and trainers to intervene, adjust training loads, and implement preventative measures, significantly reducing injury risk.
Personalized Athlete Development: Kinexa's AI can process an athlete's performance data, physiological metrics, and training history to create highly individualized development plans. This ensures that training is tailored to each athlete's unique needs and goals, maximizing their potential.
Data-Driven Coaching: The insights generated by Kinexa's AI and computer vision empower coaches with concrete, data-driven evidence to make more informed decisions about training, recovery, and game-day strategies. The LLM can also help summarize complex performance reports, making key insights easily digestible.
Kinexa is not just a tool; it's a transformative platform that embodies the future of healthcare and athletic achievement. By merging cutting-edge AI, LLM, and computer vision, Kinexa is setting new standards for precision, personalization, and proactive management in both medicine and sport.
0 notes
b-a-r-c-l-a-y · 1 month ago
Text
youtube
What AI is Doing to Your Writing Skills... by Dillon Gerrity
0 notes
geo-guru · 2 months ago
Text
Tumblr media
How to Create and Implement an llms.txt File on Your Website: Step-by-Step Guide
If you want to learn how to create and implement it, click the link below:
https://getaimonitor.com/step-by-step-to-create-and-implement-llms-txt-file/
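For orientation, the file itself is just Markdown served at the site root as /llms.txt. The sketch below follows the commonly circulated proposal; the site name, section headings, and URLs are placeholders, so refer to the linked guide for the details.

```markdown
# Example Site
> One-sentence summary of what the site offers and who it is for.

## Docs
- [Getting started](https://example.com/docs/getting-started.md): installation and setup
- [API reference](https://example.com/docs/api.md): endpoints and parameters

## Optional
- [Changelog](https://example.com/changelog.md): release history
```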
0 notes
theenterprisemac · 4 months ago
Text
Oh Dear, Why would someone write this?
Let's just dive in, because there is just so much to deal with in this post.
"ChatGPT emerged just two years ago, dramatically altering our expectations of AI. Pinecone, a vector database powering some of the most advanced AI applications, wasn't even part of the conversation six months ago. This isn't mere change, it's a fundamental shift in the velocity of innovation. As enterprises, we're no longer building on stable ground, we're constructing our future on a landscape that's continuously evolving. This offers both exciting opportunities and significant challenges for businesses aiming to maintain their competitive edge."
This is just such willful drivel. Technology was accelerating, and we were operating on less-than-stable ground, long before AI came around. The perception of increased instability now is an artifact of the fact that AI is currently so unreliable and people are trying to apply it to everything without thought as to whether they should. We are creating the instability through an overeagerness to apply AI–driven by the fact that the companies who peddle AI haven't found a good way to make money off of it and are now just trying to force it down our throats.
"One promising strategy is the empowerment of non-IT professionals through low-code platforms and AI-augmented development tools. These tools are catalysts for a new era of fusion development, enabling those outside of traditional IT roles to contribute to the development process. This approach not only alleviates the burden on professional developers but also brings domain expertise directly into the development process. The result is a more agile, responsive organization capable of rapidly adapting to changing business needs. However, the goal isn't to replace professional developers. Rather, it's about freeing them to focus on more complex, high-value tasks that truly require their expertise. By offloading routine development work to business users with domain knowledge, teams can maximize the impact of scarce professional development resources."
This might have a modicum of truth, and probably rings true to people who don't develop software, but the sad fact is that only people who don't know anything about software development think that AI has improved it.
For the most part, AI is actually pretty awful at writing code. Is it getting better? Sure. Is it good enough that you could offload any code completely to the AI? No. This move to AI and low-code is the same mistake that was made years back when developers became overly reliant on frameworks–a mistake we are now paying the price for in security and in bloat.
Low-code and AI just hide the problem behind a shoulder shrug of "I don't know what happened or how it was made." This is actually a step back, not forward.
"The integration of AI into the development process represents a significant opportunity for boosting productivity. AI assistants can generate boilerplate code, suggest optimizations and, in the near future, even prototype entire applications based on high-level descriptions."
I love when people who don't code write stuff like this. AI isn't close to this. I know there are people who do this, but the software they put out is not well made, optimal, or secure. There is ample evidence that AI writes poor code. To broaden the example, let's look at companies that apply AI to security, such as Cylance.
As you may recall, Cylance touted its machine learning/AI approach to security as a universal panacea. It didn't take long for that to be disproven: it turned out that you could fool the system into allowing even obviously malicious programs.
I think this trend really has its roots in this future that people are so desperate for where computers will be better than people and take over. I am not sure where this comes from, but we aren't close and trying to will ourselves into it is a bad idea.
"The true power of AI in boosting productivity lies not just in coding assistance, but in its ability to infuse intelligence into entire workflows and business processes. This goes beyond mere automation. It's about creating adaptive, intelligent systems that can learn and improve over time."
I don't know what "infuse intelligence" into workflows means. This is just marketing hype for AI. As for the idea that these systems learn or improve over time–we simply aren't there yet. I use numerous different LLMs, and the code they generate is universally not great. Are they getting better? Sure–but we are a long way off from this fictional future.
You see this come up in other forms, such as people talking about AI training AI–which I hate so much it's incredible. Just look at how inaccurate AI is today and imagine it training more AI on its own inaccurate output. It is such a dystopian future, run by bullshit factories dispensing bullshit down our throats faster than we can deal with it. If AI teaching AI is the future, then we are a LONG way away from it.
"By embracing AI-augmented development practices, effectively managing APIs in the evolving API economy and cultivating a high-performance development culture, enterprises can position themselves to respond quickly to technological changes. The strategic use of low-code platforms and AI-powered tools, combined with modern API management systems and a culture that values continuous learning and experimentation, allows organizations to adapt swiftly to new challenges and opportunities."
The amount of marketing jargon in this paragraph is amazing. It's the used car salesman selling you a rusty future while insisting it's brand new and clean. I just wanted to include it, because it's priceless.
"The future of application development isn't about having the most developers or the biggest budget. It's about being the most adaptive, efficient and innovative in resource utilization. By focusing on these principles, organizations can transform the challenges of the evolving technological landscape into unprecedented opportunities for growth and success."
The future of application development has ALWAYS belonged to those who adapt best, innovate, and operate most efficiently. Nothing has changed–this is just as true now as it has ever been.
I will leave you with this: beware of people who write this sort of optimistic drivel but aren't in the space beyond selling it–they are not architecting the future, and they have no business defining it. Don't let them! The future should belong to those who help build it, and people who peddle ignorant dreams have no place in it!
2 notes · View notes
techahead-software-blog · 8 months ago
Text
RAG vs Fine-Tuning: Choosing the Right Approach for Building LLM-Powered Chatbots
Tumblr media
Imagine having an ultra-intelligent assistant ready to answer any question. Now, imagine making it even more capable, specifically for tasks you rely on most. That’s the power—and the debate—behind Retrieval-Augmented Generation (RAG) and Fine-Tuning. These methods act as “training wheels,” each enhancing your AI’s capabilities in unique ways.
RAG brings in current, real-world data whenever the model needs it, perfect for tasks requiring constant updates. Fine-Tuning, on the other hand, ingrains task-specific knowledge directly into the model, tailoring it to your exact needs. Selecting between them can dramatically influence your AI’s performance and relevance.
Whether you’re building a customer-facing chatbot, automating tailored content, or optimizing an industry-specific application, choosing the right approach can make all the difference. 
This guide will delve into the core contrasts, benefits, and ideal use cases for RAG and Fine-Tuning, helping you pinpoint the best fit for your AI ambitions.
Key Takeaways:
Retrieval-Augmented Generation (RAG) and Fine-Tuning are two powerful techniques for enhancing Large Language Models (LLMs) with distinct advantages.
RAG is ideal for applications requiring real-time information updates, leveraging external knowledge bases to deliver relevant, up-to-date responses.
Fine-Tuning excels in accuracy for specific tasks, embedding task-specific knowledge directly into the model’s parameters for reliable, consistent performance.
Hybrid approaches blend the strengths of both RAG and Fine-Tuning, achieving a balance of real-time adaptability and domain-specific accuracy.
What is RAG?
Retrieval-Augmented Generation (RAG) is an advanced technique in natural language processing (NLP) that combines retrieval-based and generative models to provide highly relevant, contextually accurate responses to user queries. Introduced by researchers at Facebook AI Research (now Meta AI) and since adopted widely across the industry, RAG enables systems to pull information from extensive databases, knowledge bases, or documents and use it as part of a generated response, enhancing accuracy and relevance.
How RAG Works
Tumblr media
Retrieval Step
When a query is received, the system searches through a pre-indexed database or corpus to find relevant documents or passages. This retrieval process typically uses dense embeddings, which are vector representations of text that help identify the most semantically relevant information.
 Generation Step
The retrieved documents are then passed to a generative model, like GPT or a similar transformer-based architecture. This model combines the query with the retrieved information to produce a coherent, relevant response. The generative model doesn’t just repeat the content but rephrases and contextualizes it for clarity and depth.
Combining Outputs
The generative model synthesizes the response, ensuring that the answer is not only relevant but also presented in a user-friendly way. The combined information often makes RAG responses more informative and accurate than those generated by standalone generative models.
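Putting the retrieval and generation steps together, a bare-bones RAG loop might look like the sketch below. It assumes the sentence-transformers and openai Python packages with an API key in the environment; the toy corpus, model names, and prompt are illustrative only.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from openai import OpenAI

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # dense embedding model
client = OpenAI()                                   # assumes OPENAI_API_KEY is set

# A tiny stand-in for a pre-indexed knowledge base.
corpus = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Premium support is available 24/7 via chat on enterprise plans.",
    "Shipping to the EU typically takes 5-7 business days.",
]
corpus_vecs = embedder.encode(corpus, normalize_embeddings=True)

def answer(query: str, k: int = 2) -> str:
    # Retrieval step: rank passages by cosine similarity to the query embedding.
    query_vec = embedder.encode([query], normalize_embeddings=True)[0]
    top_idx = np.argsort(corpus_vecs @ query_vec)[::-1][:k]
    context = "\n".join(corpus[i] for i in top_idx)
    # Generation step: the LLM grounds its answer in the retrieved context.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

print(answer("How long do I have to return an item?"))
```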
Advantages of RAG
Tumblr media
Improved Relevance
By incorporating external, up-to-date sources, RAG generates more contextually accurate responses than traditional generative models alone.
Reduced Hallucination
One of the significant issues with purely generative models is “hallucination,” where they produce incorrect or fabricated information. RAG mitigates this by grounding responses in real, retrieved content.
Scalability
RAG can integrate with extensive knowledge bases and adapt to vast amounts of information, making it ideal for enterprise and research applications.
Enhanced Context Understanding
By pulling from a wide variety of sources, RAG provides a richer, more nuanced understanding of complex queries.
Real-World Knowledge Integration
For companies needing up-to-date or specialized information (e.g., medical databases and legal documents), RAG can incorporate real-time data, ensuring the response is as accurate and current as possible.
Disadvantages of RAG
Tumblr media
Computational Intensity
RAG requires both retrieval and generation steps, demanding higher processing power and memory, making it more expensive than traditional NLP models.
Reliance on Database Quality
The accuracy of RAG responses is highly dependent on the quality and relevance of the indexed knowledge base. If the corpus lacks depth or relevance, the output can suffer.
Latency Issues
The retrieval and generation process can introduce latency, potentially slowing response times, especially if the retrieval corpus is vast.
Complexity in Implementation
Setting up RAG requires both an effective retrieval system and a sophisticated generative model, increasing the technical complexity and maintenance needs.
Bias in Retrieved Data
Since RAG relies on existing data, it can inadvertently amplify biases or errors present in the retrieved sources, affecting the quality of the generated response.
What is Fine-Tuning?
Tumblr media
Fine-tuning is a process in machine learning where a pre-trained model (one that has been initially trained on a large dataset) is further trained on a more specific, smaller dataset. This step customizes the model to perform better on a particular task or within a specialized domain. Fine-tuning adjusts the weights of the model so that it can adapt to nuances in the new data, making it highly relevant for specific applications, such as medical diagnostics, legal document analysis, or customer support.
How Fine-Tuning Works
Tumblr media
Pre-Trained Model Selection
A model pre-trained on a large, general dataset (like GPT trained on a vast dataset of internet text) serves as the foundation. This model already understands a wide range of language patterns, structures, and general knowledge.
Dataset Preparation
A specific dataset, tailored to the desired task or domain, is prepared for fine-tuning. This dataset should ideally contain relevant and high-quality examples of what the model will encounter in production.
Training Process
During fine-tuning, the model is retrained on the new dataset with a lower learning rate to avoid overfitting. This step adjusts the pre-trained model’s weights so that it can capture the specific patterns, terminology, or context in the new data without losing its general language understanding.
Evaluation and Optimization
The fine-tuned model is tested against a validation dataset to ensure it performs well. If necessary, hyperparameters are adjusted to further optimize performance.
Deployment
Once fine-tuning yields satisfactory results, the model is ready for deployment to handle specific tasks with improved accuracy and relevancy.
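As a rough sketch of this workflow, the snippet below fine-tunes a small pre-trained encoder for binary text classification with the Hugging Face Trainer. It assumes the transformers and datasets packages; the model, the IMDB dataset (a stand-in for your domain corpus), and the hyperparameters are placeholders rather than recommendations.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Pre-trained model selection: a small general-purpose encoder.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Dataset preparation: tokenize a task-specific corpus (IMDB used as a stand-in).
dataset = load_dataset("imdb")
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)
dataset = dataset.map(tokenize, batched=True)

# Training process: a low learning rate and few epochs adapt the weights
# without discarding the general language understanding from pre-training.
args = TrainingArguments(
    output_dir="finetuned-model",
    learning_rate=2e-5,
    num_train_epochs=2,
    per_device_train_batch_size=16,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=dataset["test"].select(range(500)),
)

trainer.train()       # fine-tune on the new data
trainer.evaluate()    # evaluation against the held-out split
trainer.save_model()  # deployable checkpoint
```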
Advantages of Fine-Tuning
Tumblr media
Enhanced Accuracy
Fine-tuning significantly improves the model’s performance on domain-specific tasks since it adapts to the unique vocabulary and context of the target domain.
Cost-Effectiveness
It’s more cost-effective than training a new model from scratch. Leveraging a pre-trained model saves computational resources and reduces time to deployment.
Task-Specific Customization
Fine-tuning enables customization for niche applications, like customer service responses, medical diagnostics, or legal document summaries, where specialized vocabulary and context are required.
Reduced Data Requirements
Fine-tuning typically requires a smaller dataset than training a model from scratch, as the model has already learned fundamental language patterns from the pre-training phase.
Scalability Across Domains
The same pre-trained model can be fine-tuned for multiple specialized tasks, making it highly adaptable across different applications and industries.
Disadvantages of Fine-Tuning
Tumblr media
Risk of Overfitting
If the fine-tuning dataset is too small or lacks diversity, the model may overfit, meaning it performs well on the fine-tuning data but poorly on new inputs.
Loss of General Knowledge
Excessive fine-tuning on a narrow dataset can lead to a loss of general language understanding, making the model less effective outside the fine-tuned domain.
Data Sensitivity
Fine-tuning may amplify biases or errors present in the new dataset, especially if it’s not balanced or representative.
Computation Costs
While fine-tuning is cheaper than training from scratch, it still requires computational resources, which can be costly for complex models or large datasets.
Maintenance and Updates
Fine-tuned models may require periodic retraining or updating as new domain-specific data becomes available, adding to maintenance costs.
Key Difference Between RAG and Fine-Tuning
Tumblr media
Key Trade-Offs to Consider
Tumblr media
Data Dependency 
RAG’s dynamic data retrieval means it’s less dependent on static data, allowing accurate responses without retraining.
Cost and Time
Fine-tuning is computationally demanding and time-consuming, yet yields highly specialized models for specific use cases.
Dynamic vs. Static Knowledge
RAG benefits from dynamic, up-to-date retrieval, while fine-tuning relies on stored static knowledge, which may age.
When to Choose Between RAG and Fine-Tuning?
RAG shines in applications needing vast and frequently updated knowledge, like tech support, research tools, or real-time summarization. It minimizes retraining requirements but demands a high-quality retrieval setup to avoid inaccuracies. Example: A chatbot using RAG for product recommendations can fetch real-time data from a constantly updated database.
Fine-tuning excels in tasks needing domain-specific knowledge, such as medical diagnostics, content generation, or document reviews. While demanding quality data and computational resources, it delivers consistent results post-training, making it well-suited for static applications. Example: A fine-tuned AI model for document summarization in finance provides precise outputs tailored to industry-specific language.
The right choice depends entirely on the use case of your LLM chatbot. Weigh the advantages and disadvantages listed above and choose the fit that matches your custom LLM development.
Hybrid Approaches: Leveraging RAG and Fine-Tuning Together
Rather than favoring either RAG or fine-tuning, hybrid approaches combine the strengths of both methods. This approach fine-tunes the model for domain-specific tasks, ensuring consistent and precise performance. At the same time, it incorporates RAG’s dynamic retrieval for real-time data, providing flexibility in volatile environments.
Optimized for Precision and Real-Time Responsiveness
Tumblr media
With hybridization, the model achieves high accuracy for specialized tasks while adapting flexibly to real-time information. This balance is crucial in environments that require both up-to-date insights and historical knowledge, such as customer service, finance, and healthcare.
Fine-Tuning for Domain Consistency: By fine-tuning, hybrid models develop strong, domain-specific understanding, offering reliable and consistent responses within specialized contexts.
RAG for Real-Time Adaptability: Integrating RAG enables the model to access external information dynamically, keeping responses aligned with the latest data.
Ideal for Data-Intensive Industries: Hybrid models are indispensable in fields like finance, healthcare, and customer service, where both past insights and current trends matter. They adapt to new information while retaining industry-specific precision.
Versatile, Cost-Effective Performance
Hybrid approaches maximize flexibility without extensive retraining, reducing costs in data management and computational resources. This approach allows organizations to leverage existing fine-tuned knowledge while scaling up with dynamic retrieval, making it a robust, future-proof solution.
Conclusion
Choosing between RAG and Fine-Tuning depends on your application’s requirements. RAG delivers flexibility and adaptability, ideal for dynamic, multi-domain needs. It provides real-time data access, making it invaluable for applications with constantly changing information.
Fine-Tuning, however, focuses on domain-specific tasks, achieving greater precision and efficiency. It’s perfect for tasks where accuracy is non-negotiable, embedding knowledge directly within the model.
Hybrid approaches blend these benefits, offering the best of both. However, these solutions demand thoughtful integration for optimal performance, balancing flexibility with precision.
At TechAhead, we excel in delivering custom AI app development built around specific business objectives. Whether implementing RAG, Fine-Tuning, or a hybrid approach, our expert team ensures AI solutions drive impactful performance gains for your business.
Source URL: https://www.techaheadcorp.com/blog/rag-vs-fine-tuning-difference-for-chatbots/
0 notes
jcmarchi · 8 months ago
Text
End GPU underutilization: Achieve peak efficiency
New Post has been published on https://thedigitalinsider.com/end-gpu-underutilization-achieve-peak-efficiency/
End GPU underutilization: Achieve peak efficiency
Tumblr media
AI and deep learning inference demand powerful AI accelerators, but are you truly maximizing yours?
GPUs often operate at a mere 30-40% utilization, squandering valuable silicon, budget, and energy.
In this live session, NeuReality’s Field CTO, Iddo Kadim, tackles the critical challenge of maximizing AI accelerator capability. Whether you build, borrow, or buy AI acceleration – this is a must-attend.
Date: Thursday, December 5
Time: 10 AM PST | 5 PM GMT
Location: Online
Iddo will reveal a multi-faceted approach encompassing intelligent software, optimized APIs, and efficient AI inference instructions to unlock benchmark-shattering performance for ANY AI accelerator.
The result?
You’ll get more from the GPUs you buy, rather than buying more GPUs to make up for the limitations of today’s CPU- and NIC-reliant inference architectures. And you’ll likely achieve superior system performance within your current energy and cost constraints.
Your key takeaways:
The urgency of GPU optimization: Is mediocre utilization hindering your AI initiatives? Discover new approaches to achieve 100% utilization with superior performance per dollar and per watt leading to greater energy efficiency.
Factors impacting utilization: Master the key metrics that influence GPU utilization: compute usage, memory usage, and memory bandwidth (a quick way to spot-check these is sketched just after this list).
Beyond hardware: Harness the power of intelligent software and APIs. Optimize AI data pre-processing, compute graphs, and workload routing to maximize your AI accelerator (XPU, ASIC, FPGA) investments.
Smart options to explore: Uncover the root causes of underutilized AI accelerators and explore modern solutions to remedy them. You’ll get a summary of recent LLM real-world performance results – made possible by pairing NeuReality’s NR1 server-on-a-chip with any GPU or AI accelerator.
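Not part of the session itself, but a quick way to spot-check the metrics above on your own hardware is NVIDIA's NVML bindings (the nvidia-ml-py package). A minimal sketch, assuming at least one NVIDIA GPU and recent drivers:

```python
# pip install nvidia-ml-py   (provides the pynvml module)
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)        # first GPU

util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # sampled over the last window
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)

print(f"compute utilization: {util.gpu}%")
print(f"memory-controller (bandwidth) activity: {util.memory}%")
print(f"memory used: {mem.used / 2**30:.1f} GiB of {mem.total / 2**30:.1f} GiB")

pynvml.nvmlShutdown()
```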
You spent a fortune on your GPUs – don’t let them sit idle for any amount of time.
1 note · View note
dandelionstep · 1 year ago
Text
everyone wants to sensationalize about ai datacenter water usage no one wants to talk about the ai you can run locally thus completely mitigating contribution to that issue. because they don't GAF about the environment they just hate ai
0 notes
txttletale · 2 months ago
Text
like, i said this in my longer post on this but i think it bears repeating on its own because it really helps to understand the whole dynamic at play here: talk to anyone with experience working in yknow, the greater buzzfeed content slop industry, and they will tell you that before LLMs came along, it was already one robot (whatever SEO optimization tool your workplace paid for) writing for another robot (google search algorithm) -- with any human writers serving primarily as a middleman between the two
913 notes · View notes
geo-guru · 3 months ago
Text
0 notes
theenterprisemac · 4 months ago
Text
Can't Get Away from AI Optimism
It never ceases to amaze me how many people are so optimistic about AI, and yet the results of AI to date are pretty marginal. I recently read another post by someone claiming IT folks are going to be replaced in 3-5 years, likely starting with entry-level support roles. As evidence, they then proceeded to list off a bunch of half-finished, not especially functional technology that has yet to mature.
Can basic tasks be replaced by a ChatBot? Sure–which tasks is an open question, but I think there is potential here. However, is this doom and gloom for people in low-level IT support in the next 3-5 years? Probably not.
Here is the real issue: ChatBots today proclaim many false things with confidence and are wrong as often as, or more often than, they are right. Even with a doubling in accuracy, I suspect they will still only be competent to cover the bottom 20-30% of support questions.
I think the more likely scenario is that the ChatBot simply initiates human-built workflows via a chat interface. I am not sure why people want this, but many seem to.
I think we are a long way away from the level of frictionless interaction required to make this low-level AI ChatBot future a reality in any sort of useful or lasting way.
If you want to talk about risk as a low-level IT support person–here is the real risk:
Technology is complex
Technology moves fast
The number of simple tasks that people are still expected to do that aren't regularly automated is shrinking
At some point AI might eat your lunch in the long-term
All that being said, if you are a low-level IT support person, don't worry. Keep learning and grow your skills. Find productive areas in IT and move into those–don't stay in low-level support. If you do this, you will be fine.
0 notes
youzicha · 9 months ago
Text
Google Docs provides language-model driven autocorrect, where it highlights "unlikely" strings of words and offers to replace them for you.
It makes a lot of sense. But it's also a bit ironic, given that statistical language models started with Shannon's information theory, which identifies "information" with surprisal. Like, you could imagine if you just keep right-clicking the text, eventually Google will rewrite it for you into something carrying zero bits of information.
That's actually what LLM-written texts try to optimize for. Which I believe you can notice!
Tumblr media
There is no "idea" in the poem except what is forced on it by the prompt; the text generation algorithm specifically tries to put as few bits as possible into the text. The more clichéd the poem is, the closer it is to optimal.
Sometimes humans fall into the same trap. I keep thinking about the editor who changed "a feeling of jealousy" into "a pang of jealousy", replacing the phrase with a cliché and deliberately erasing the bits of information contained in it.
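One rough way to make the "bits of information" framing concrete is to score a phrase's average per-token surprisal with a small language model. A minimal sketch, assuming the Hugging Face transformers library and GPT-2; the comparison is illustrative, not a claim about what that editor's model would have said.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def bits_per_token(text: str) -> float:
    """Average surprisal (in bits) GPT-2 assigns to each token of `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns mean cross-entropy in nats.
        loss = model(ids, labels=ids).loss
    return loss.item() / math.log(2)  # nats -> bits

print(bits_per_token("a feeling of jealousy"))  # the original phrasing
print(bits_per_token("a pang of jealousy"))     # the cliché; likely scores lower
```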
695 notes · View notes
ms-demeanor · 1 year ago
Text
The thing about LLMs is that they're like cars that have a touchscreen on the console; it's more expensive and worse than what came before in almost all circumstances.
And like car touchscreens it's something that I suspect that the vast majority of consumers dislike and would prefer not to use.
But that doesn't mean that touchscreens are bad it just means they don't belong in cars.
It *IS* a massive problem that "AI" is being shoved at us by a bunch of people who invested WAY too much into AI and are trying to make a return on their investment. It is *ALSO* a problem that "AI" is a terrible name for the pattern interpretation tools that tech companies have dumped billions of dollars into so people are being told that a lot of things that are just pretty basic algorithmic tools are "AI," which makes the whole thing feel overhyped, oversold, and useless (which it is for a tremendous number of people!)
But I get really frustrated with claims that AI slop is what ruined google search (google search has been ruined for a long time; when their goal became "people need to do more searches so we can serve them more ads" instead of "we need to return the best results for our users" it was destroyed and that had nothing to do with AI and everything to do with a profit motive) or that AI is why we're being inundated by spammers (spammers have been a problem for a VERY long time) or that it's impossible to find good info these days because the internet is full of garbage AI articles to generate clicks (that has been the BANE OF MY EXISTENCE in terms of research for MUCH LONGER than GPT4 has been around it is called search engine optimization and if you haven't had your results full of poorly written non-information listicles for the last seven years I suspect you haven't been doing quite the same volume of searching as I have been).
These are known problems that are being exacerbated by this particular kind of tool, but the problem with phishing isn't that the emails are extremely tailored to particular users, it's that the world is chock full of scammers who are incentivized to treat people like shit for money.
973 notes · View notes
disgustedorite · 1 month ago
Text
"why don't you have a laptop already it's 2025" more room for doohickeys and I have literally never had a good laptop experience. And unlike my phone they don't come in "can survive being spilled on or dropped in a puddle"
severe thunderstorms but i want to draw and its a desktop augh
6 notes · View notes